[WIP]: Jenkinsfile: add Testing stage #31
Conversation
@dustymabe can you sync up with me on how to test this? I've run the commands themselves locally inside a …
@arithx one way might be to run the environment locally as described in https://github.com/coreos/fedora-coreos-pipeline/blob/a8bc0efdf1c6eba0bd75a03d434c74f473536f52/HACKING.md and see if it works there. We can also set up a container in CentOS CI, run it there, and see if it works. We should also look to get you access to that cluster.
Jenkinsfile (outdated)
```
@@ -47,6 +47,14 @@ podTemplate(cloud: 'openshift', label: 'coreos-assembler', yaml: pod, defaultCon
    currentBuild.description = "⚡ ${newBuildID}"
}

stage('Test') {
    utils.shwrap("""
        latest_build=$(readlink builds/latest)
```
Need to escape all the `$` in this hunk.
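For context: inside a Groovy triple-quoted `"""` string, a bare `$` starts GString interpolation, so shell substitutions have to be written as `\$` to survive to the shell. A minimal sketch of the escaped hunk (`utils.shwrap` is the pipeline's own shell helper):

```groovy
stage('Test') {
    utils.shwrap("""
        # \$(...) and \${...} are escaped so that expansion happens
        # in the shell, not in Groovy's GString interpolation
        latest_build=\$(readlink builds/latest)
        echo "testing build \${latest_build}"
    """)
}
```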
Yeah, HACKING.md should be pretty good now! It does involve some steps to get …
(Also, thanks for this patch! 👍)
Force-pushed from a3cc11e to 9429fb0.
Marking as WIP.
This is now blocked on a workaround for coreos/mantle#956.
Adds a testing stage which will clone down a fork of mantle, build kola/kolet, and test via the unprivileged-qemu platform. Currently does not archive any testing artifacts.
Updated to use the fcos_ci branch of mantle. Untested.
I guess if one wanted to test this today, they'd use the …
```
utils.shwrap("""
    latest_build=\$(readlink builds/latest)
    qcow=\$(ls builds/"\${latest_build}"/*-"\${latest_build}"-qemu.qcow2)
    mantle/bin/kola -p unprivileged-qemu --qemu-image "\${qcow}" -b fcos run | tee
```
IMO something like this should be `coreos-assembler kola`.
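For illustration, the stage could then collapse to something like the sketch below. This assumes a hypothetical `coreos-assembler kola` entrypoint (as proposed in the comment above) that locates the latest build's QEMU image itself; it is not an existing subcommand in this discussion.

```groovy
stage('Test') {
    // sketch only: assumes a `coreos-assembler kola` wrapper that
    // finds the latest build's QEMU image and runs kola against it
    utils.shwrap("""
        coreos-assembler kola run
    """)
}
```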
```
stage('Test') {
    // clone & build kola
    utils.shwrap("""
        git clone https://github.com/arithx/mantle
```
Yeah...we need to make this more convenient in cosa.
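To make the truncated hunk above concrete, here is a hedged sketch of the full clone-and-build step. It assumes mantle's `./build` script accepts a list of targets such as `kola` and `kolet`, and uses the `fcos_ci` branch mentioned earlier in this thread:

```groovy
stage('Test') {
    // clone & build kola/kolet (sketch; assumes mantle's ./build
    // script takes the names of the binaries to compile)
    utils.shwrap("""
        git clone --branch fcos_ci https://github.com/arithx/mantle
        cd mantle && ./build kola kolet
    """)
}
```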
So, coreos/coreos-assembler#85 has been merged now. Rebasing on top of that would be a great opportunity for someone who wants to get familiar with the pipeline and hacking on it! At least as a first step, just running …
I think we should stop using `/srv` as a workdir entirely and just always build in the workspace. The core issue here is that (1) we want to be able to have concurrent builds, and (2) a cosa workdir can't be easily shared today. This also simplifies the devel vs prod logic quite a bit since it had some funky conditionals around this.

So then, how can developers without S3 creds actually *access* built artifacts? We simply archive them as part of the build. This is in line also with coreos#31, where we'll probably be archiving things anyway.

Finally, how *can* we use the PVC as cache in a safe way shareable across all the streams? I see two options offhand:

1. as a local RPM mirror: add flags and logic to `cosa fetch` to read & write RPMs in `/srv`, hold a lock to regen metadata
2. as a pkgcache repo: similarly to the above, but also doing the import, so it's just a pkgcache repo; this would probably require teaching rpm-ostree about this, or `cosa fetch` could just blindly import every ref
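Regarding archiving as part of the build above: a minimal sketch using Jenkins' built-in `archiveArtifacts` step, reusing the `newBuildID` variable that already appears in this Jenkinsfile (the glob pattern and directory layout are assumptions):

```groovy
stage('Archive') {
    // sketch: archive the new build's outputs from the workspace so
    // developers without S3 creds can fetch them from the Jenkins UI
    archiveArtifacts artifacts: "builds/${newBuildID}/**", fingerprint: true
}
```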
Adds a testing stage which runs all kola tests for the latest build.
Currently does not archive any testing artifacts.